    Agile Testing

I was at a customer site not so long ago giving a course on Agile Software Development, and part of the course was an introduction to test-driven development (TDD). TDD is a process whereby the requirements are specified as a set of tests and the developers use the number of tests passing, or failing, to measure the amount of progress in the system. In the middle of one of my talks, the head of testing rose from his seat and asked: "So you're saying that we should let the developers know what the tests are before they even start coding?" After I replied in the affirmative he responded with, "That would be cheating! If we did that, the developers would then only write code to pass the tests!"
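To make the idea concrete, here is a minimal TDD-style sketch in Python. The requirement, the is_valid_username function, and the rules it checks are all hypothetical and invented for illustration; the point is only that the tests exist before the code, and that progress is measured by how many of them pass.

    # Hypothetical requirement: "usernames are 3-20 characters,
    # letters and digits only". In TDD these tests are written first;
    # the team's progress is the count of tests that pass.
    import re
    import unittest

    def is_valid_username(name):
        # Written after the tests, with just enough logic to pass them.
        return bool(re.fullmatch(r"[A-Za-z0-9]{3,20}", name))

    class UsernameRequirement(unittest.TestCase):
        def test_accepts_simple_name(self):
            self.assertTrue(is_valid_username("alice99"))

        def test_rejects_too_short_name(self):
            self.assertFalse(is_valid_username("ab"))

        def test_rejects_punctuation(self):
            self.assertFalse(is_valid_username("bob!"))

    if __name__ == "__main__":
        unittest.main()

Run before the function exists and every test fails; run afterwards and they all pass. That pass count is exactly the progress measure the manager objected to sharing.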

That particular manager's opinion is one I've found to be reasonably common among testers, and it's one I've always found difficult to understand. There seems to be a general rule in some organisations that once the requirements have been captured, there should be no communication between developers and testers until the day the code is finished and ready for testing. On that day the code is signed off by development and handed over to testing, only for it to be rejected and returned because of the number of defects in it. In a lot of cases, these are defects the developers weren't even aware were defects. It's been said that, in many projects, this is where design and coding really start. This is where the developers finally discover what the application is meant to do and, just as importantly, what it is meant not to do.

This is often the point in the project lifecycle where the blame and recrimination wars begin too. The developers insist their interpretation of the requirements is the correct one, the testers completely disagree, and so the system fails the tests with each side refusing to admit being in the wrong. Is it any wonder that in many companies there is no love lost between the two factions?

How does this occur? Both sides have almost completely different views of what the system should do, yet both were subject to the same set of requirements: requirements captured and documented in a manner specifically intended to make them understandable by everybody and to prevent any equivocation or ambiguity.

The problem is partly the ambiguity of language. Although we have expressions like "plain English", the English language is far from plain, and I'm fairly certain this is true for every other language on the planet too. Languages and the rules governing their usage are complex. The meanings of words often change depending on the context in which they are used. Sometimes the context is explicitly communicated along with the words; other times it is tacit and the speaker expects the listener to infer it. The speaker may also use body language or emotional cues to give the listener additional contextual information.

In his book User Stories Applied, Mike Cohn uses 'buffalo' as an example of a word that can have many meanings. It is, as he says, a bison-like animal, but dictionary.com also defines it as a verb with two further meanings: to bully or intimidate, and to deceive, confuse, or bewilder. In addition, Buffalo is a city in the state of New York, so a valid sentence using these meanings could be "Buffalo buffalo buffalo and buffalo Buffalo buffalo". My grammar checker doesn't like that at all and complains that the word buffalo is repeated too many times. However, it doesn't know English as well as we do, and so isn't able to figure out that this is, indeed, a perfectly legitimate statement meaning "Bison from a city in New York state intimidate and confuse other bison from the same city." We are able to understand it because we are aware of the context surrounding it.

An interesting and humorous example, if somewhat contrived, but it demonstrates how even a perfectly spelt, punctuated, and grammatically correct sentence can be impenetrable without context. Certainly impenetrable to my grammar checker, and probably to most humans too.

We also see another phenomenon in effect here. When faced with information that is incomplete, we have a tendency to fill the gaps with assumptions based on our own past experiences. We then process the information and use the conclusions for our next set of actions, which may include gathering further incomplete information, filling in the gaps, and performing more processing. On and on we continue, and with each step we climb further up the 'ladder of inference'. Because the experiences of each human being are unique, no two people will climb the ladder in the same way, and so each will reach different conclusions. The more incomplete the original information is, and the more gaps that are filled with personal assumptions, the more we become convinced that our, and only our, conclusions are the correct ones. Fortunately, we share a lot of culture and experiences with our colleagues, so we make similar assumptions, and when we climb the ladder our conclusions shouldn't be too different to theirs. Developers and testers, though, are not always immediate colleagues. Often they belong to separate departments with separate offices, and sometimes even separate buildings. The physical distance between them, and the competition between the two factions, make their views of the world even more disparate.

The third part of the problem is that the requirements document is an artefact that forms the basis of a contract. If we're working to a fixed-scope, fixed-price contract, it is this very document that defines the extent of the scope. According to Barry Boehm's famous exponential cost-of-change curve, the cost of changing the specification increases by a factor of ten each time the project moves through a stage in the development cycle. At the very beginning of any project longer than, say, a month, it is extremely unlikely, if not impossible, for the customer to know what will be required at the end of the project. If the customer or the analyst gets any of the requirements wrong, or omits them, in the requirements-gathering phase, there will be a heavy cost to pay for adding or changing them later. Given this set of circumstances, the optimal strategy for the person preparing the document is to couch the requirements in as vague terms as possible. The use of ambiguity gives us the chance to argue the precise detail later, when we have more knowledge about the system.
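To see what that factor of ten implies, here is a toy calculation. The phase names and the base cost are illustrative assumptions, not figures from Boehm's book; only the tenfold growth per stage comes from the curve described above.

    # Illustrative only: a defect costing 100 (in any currency) to fix
    # during requirements costs ten times more at each later stage.
    phases = ["requirements", "design", "coding", "testing", "production"]
    base_cost = 100

    for stage, phase in enumerate(phases):
        print(f"{phase:>12}: {base_cost * 10 ** stage:>9,}")

A requirement misunderstood at the start but discovered in production is, on this curve, ten thousand times more expensive to put right than one caught immediately.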

These are three problems that lead to failures near the end of the project: just the place where the cost-of-change curve says failures are the most expensive to fix, and just as we'd planned to hand the project over to the customer. In fact, failure often occurs at the very last place we want, or can afford, to fail, but the causes of failure are inherent in the methods we use to plan and implement our projects. In effect, we actually plan to fail when we are at our most vulnerable!

Earlier in this article, we proposed that the testing phase is often when the developers really start to find out what the project is meant to do. If that is the case, would it not make more sense to start the testing phase at the beginning of the project? This may sound strange and counter-intuitive to a lot of people (how can we test something that doesn't yet exist?), but it should make perfect sense to anyone with management training. They will know that quality cannot be inspected into a product after production; it can only be built in. The most important time for any defect is the twenty-four hours after it is created. If the defect is caught within those twenty-four hours, the cost of fixing it is negligible compared with the cost of fixing it later, after more code has been written on top of it. This can only happen if both the tests and the testers are available to the developers from the very start of the project.

Testing from the beginning of the project, and continually testing throughout the project lifecycle, is the basis of agile testing. If we can work with the customer to help him specify his requirements in terms of tests, it makes them completely unambiguous: the tests either pass or they don't. If our coders only write code to pass tests, we can be sure of one hundred percent test coverage. Most of all, if we keep our testers, developers, and customers (or customer representatives) in constant face-to-face communication with each other, we can eradicate most of the errors caused by climbing the ladder of inference. Breaking our projects into smaller chunks of work and iterating them will give us frequent feedback on the current state of the project.
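As a sketch of what "requirements in terms of tests" can look like, here is a hypothetical customer rule expressed as an executable test. The rule, the apply_discount function, and the prices are all invented for illustration; the point is that the requirement either passes or fails, with no room for interpretation.

    # Hypothetical customer requirement: "orders of 100 units or more
    # get a 10% discount". Prices are in integer cents so that the
    # comparisons are exact, with no floating-point rounding.
    import unittest

    def apply_discount(quantity, unit_price_cents):
        total = quantity * unit_price_cents
        if quantity >= 100:
            total -= total // 10  # the agreed 10% discount
        return total

    class DiscountRequirement(unittest.TestCase):
        def test_large_orders_get_ten_percent_discount(self):
            self.assertEqual(apply_discount(100, 200), 18000)

        def test_small_orders_pay_full_price(self):
            self.assertEqual(apply_discount(99, 200), 19800)

    if __name__ == "__main__":
        unittest.main()

Notice that the boundary (99 versus 100 units) is pinned down by the tests themselves; a prose requirement could easily have left it ambiguous.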

There are many teams now using agile testing techniques to improve the quality of their products, and they are having great success. Some investment in training is required, and changes to the workspace are necessary to allow customers, testers, and developers to work side by side, but these are a small price to pay for the advantages gained.

The most difficult thing for most teams is shifting away from the perception of the test team as competing with the developers, focused on detecting faults and preventing poor-quality products from being released. The new paradigm of agile testing is the test team collaborating with the developers to build quality in from the start and to release robust products that deliver the best possible business value for the customer.

References
• Cohn, M. User Stories Applied. Addison-Wesley Professional, 2004.
• Boehm, B. Software Engineering Economics. Prentice Hall, 1982.

    About David Putman
David's role as mentor has taken him to a variety of organisations, where he has acted as an advisor on the management of software development projects to companies on three continents. His work continues to give him interesting and practical examples of all kinds of management and software development issues.

    David regularly presents papers and tutorials on the management and practice of software development at national and international events. He used to write the "Models and Methodologies" column for Application Development Advisor magazine and has had many articles published in other publications including the Cutter IT Journal.

    His main interests are the management of people and software development projects, learning organisations, and making work satisfying to all those involved.

    About Charlie Poole
    Charlie Poole has spent more than 30 years as a software developer, designer, project manager, trainer and coach with a long career in the government sector. He has managed an independent consultancy in the US since 1995, with clients ranging from government agencies to Internet start-ups.

Charlie's technical background is very broad. In recent years, he has specialized in Windows development using C++ and C#. He is an experienced COM and COM+ developer and has worked with the .NET Framework since its inception. He is one of the authors of the NUnit .NET testing framework and contributes to several other open-source tool projects.

Combining years of experience with traditional approaches with an avid interest in Agile methods, Charlie is a practitioner and coach of Extreme Programming and a certified ScrumMaster. He is a familiar presence at Agile conferences in North America and Europe and participates in numerous panels and workshops.


